In this work, an integration of two machine learning approaches, namely domain adaptation and explainable AI, is proposed to address the twin problems of generalized detection and explainability. First, a Domain Adversarial Neural Network (DANN) is used to develop a generalized misinformation detector across multiple social media platforms; the DANN is employed to generate classification results for test domains with relevant but unseen data. The DANN-based model is a traditional black-box model and cannot justify its outcome, i.e., the labels for the target domain. Therefore, the explainable AI method Local Interpretable Model-Agnostic Explanations (LIME) is applied to explain the outcomes of the DANN model. To demonstrate the integration of these two approaches for generalized detection with effective explanation, COVID-19 misinformation is considered as a case study. We experimented with two datasets, namely CoAID and Misovac, and compared the results with and without the DANN implementation. DANN significantly improves the F1 classification score and boosts accuracy and AUC performance. The results obtained show that the proposed framework performs well under domain shift and can learn domain-invariant features while explaining the target labels with LIME, enabling trustworthy information processing and extraction to combat misinformation effectively.
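To make the domain-adversarial component concrete, below is a minimal sketch assuming a PyTorch implementation: a gradient-reversal layer sits between a shared feature extractor and a domain classifier, so the features become uninformative about the source platform while staying predictive of the misinformation label. The layer sizes, input dimensionality, and two-head architecture are illustrative assumptions, not the paper's exact model.

```python
# Minimal DANN sketch: gradient reversal between feature extractor and domain head.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DANN(nn.Module):
    def __init__(self, in_dim=768, hidden=256, n_classes=2, n_domains=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.label_head = nn.Linear(hidden, n_classes)    # misinformation vs. genuine
        self.domain_head = nn.Linear(hidden, n_domains)   # source vs. target platform

    def forward(self, x, lambd=1.0):
        z = self.features(x)
        return self.label_head(z), self.domain_head(GradReverse.apply(z, lambd))
```

The trained label head's predictions could then be wrapped in a probability function and handed to a LIME text explainer to attribute each decision to individual tokens of the post.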
The Government of Kerala had increased the frequency of supply of free food kits owing to the pandemic; however, these items were static and not indicative of the personal preferences of the consumers. This paper conducts a comparative analysis of various clustering techniques on a scaled-down version of a real-world dataset obtained through a conjoint analysis-based survey. Clustering carried out by centroid-based methods such as k-means is analyzed and the results are plotted alongside SVD, and finally, a conclusion is reached as to which of the two is better. Once the clusters have been formulated, commodities are also decided upon for each cluster. Also, clustering is further enhanced by reassignment, based on a specific cluster loss threshold. Thus, the most efficacious clustering technique for designing a food kit tailored to the needs of individuals is finally obtained.
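As an illustration of the clustering step, the following is a small sketch assuming the survey responses are encoded as a numeric preference matrix; the number of respondents, attributes, and clusters are hypothetical values, not taken from the paper's dataset.

```python
# Illustrative k-means clustering of conjoint-style preference scores,
# with an SVD projection used only to visualize cluster separation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

rng = np.random.default_rng(0)
X = rng.random((200, 8))            # 200 respondents x 8 preference scores

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_

coords = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)  # 2-D view

# A per-cluster "food kit" could then be assembled from the highest-rated items.
for c in range(4):
    top_items = X[labels == c].mean(axis=0).argsort()[::-1][:3]
    print(f"cluster {c}: top item indices {top_items.tolist()}")
```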
Large language models (LLMs) have exploded in popularity in the past few years and have achieved undeniably impressive results on benchmarks as varied as question answering and text summarization. We provide a simple new prompting strategy that leads to yet another supposedly "super-human" result, this time outperforming humans at common sense ethical reasoning (as measured by accuracy on a subset of the ETHICS dataset). Unfortunately, we find that relying on average performance to judge capabilities can be highly misleading. LLM errors differ systematically from human errors in ways that make it easy to craft adversarial examples, or even perturb existing examples to flip the output label. We also observe signs of inverse scaling with model size on some examples, and show that prompting models to "explain their reasoning" often leads to alarming justifications of unethical actions. Our results highlight how human-like performance does not necessarily imply human-like understanding or reasoning.
We present a method for controlling a swarm using its spectral decomposition -- that is, by describing the set of trajectories of a swarm in terms of a spatial distribution throughout the operational domain -- guaranteeing scale invariance with respect to the number of agents both for computation and for the operator tasked with controlling the swarm. We use ergodic control, decentralized across the network, for implementation. In the DARPA OFFSET program field setting, we test this interface design for the operator using the STOMP interface -- the same interface used by Raytheon BBN throughout the duration of the OFFSET program. In these tests, we demonstrate that our approach is scale-invariant -- the user specification does not depend on the number of agents; it is persistent -- the specification remains active until the user specifies a new command; and it is real-time -- the user can interact with and interrupt the swarm at any time. Moreover, we show that the spectral/ergodic specification of swarm behavior degrades gracefully as the number of agents goes down, enabling the operator to maintain the same approach as agents become disabled or are added to the network. We demonstrate the scale-invariance and dynamic response of our system in a field relevant simulator on a variety of tactical scenarios with up to 50 agents. We also demonstrate the dynamic response of our system in the field with a smaller team of agents. Lastly, we make the code for our system available.
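To illustrate why the specification is scale-invariant, here is a minimal sketch of the spectral (Fourier) description behind ergodic control on the unit square: the operator's input is a spatial distribution encoded as a fixed number of cosine-basis coefficients, independent of how many agents execute it. The grid size, number of modes, weighting, and example distribution are assumptions for illustration, not the fielded system's parameters.

```python
# Spectral encoding of a target coverage distribution and the ergodic metric
# comparing it against the time-averaged statistics of agent trajectories.
import numpy as np

K = 10                                   # modes per axis -> K*K coefficients total
n = 64
xs = np.linspace(0, 1, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# Example target: concentrate coverage around (0.7, 0.3).
phi = np.exp(-50 * ((X - 0.7) ** 2 + (Y - 0.3) ** 2))
phi /= phi.sum()

def basis(k1, k2, x, y):
    return np.cos(np.pi * k1 * x) * np.cos(np.pi * k2 * y)

# Spectral coefficients of the target distribution (the operator's command).
phi_k = np.array([[np.sum(phi * basis(k1, k2, X, Y)) for k2 in range(K)]
                  for k1 in range(K)])

def trajectory_coeffs(traj):
    """Time-averaged basis statistics of agent trajectories, shape (K, K).
    traj: array (num_agents, T, 2); more agents add samples, not coefficients."""
    pts = traj.reshape(-1, 2)
    return np.array([[np.mean(basis(k1, k2, pts[:, 0], pts[:, 1])) for k2 in range(K)]
                     for k1 in range(K)])

# Ergodic metric: weighted distance between trajectory and target spectra.
k1g, k2g = np.meshgrid(np.arange(K), np.arange(K), indexing="ij")
weights = (1.0 + k1g ** 2 + k2g ** 2) ** -1.5

def ergodic_metric(traj):
    return np.sum(weights * (trajectory_coeffs(traj) - phi_k) ** 2)
```

Because the command lives entirely in the fixed-size coefficient array, the same specification remains valid as agents are added or disabled, which is the graceful-degradation property described above.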
In this paper, we address the problem of safe trajectory planning for autonomous search and exploration in constrained, cluttered environments. Guaranteeing safe navigation is a challenging problem that has garnered significant attention. This work contributes a method that generates guaranteed safety-critical search trajectories in a cluttered environment. Our approach integrates safety-critical constraints using discrete control barrier functions (DCBFs) with ergodic trajectory optimization to enable safe exploration. Ergodic trajectory optimization plans continuous exploratory trajectories that guarantee full coverage of a space. We demonstrate through simulated and experimental results on a drone that our approach is able to generate trajectories that enable safe and effective exploration. Furthermore, we show the efficacy of our approach for safe exploration on real-world single- and multi-drone platforms.
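For intuition, the sketch below shows the discrete control barrier function condition that such a planner enforces at every step, h(x_{k+1}) >= (1 - gamma) * h(x_k) with 0 < gamma <= 1, which keeps the safe set {x : h(x) >= 0} forward invariant. The circular obstacle layout, barrier definition, and gamma value are illustrative assumptions, not the paper's experimental setup.

```python
# DCBF feasibility check for a planned 2-D trajectory around circular obstacles.
import numpy as np

obstacles = [((0.5, 0.5), 0.15), ((0.2, 0.8), 0.10)]   # (center, radius)
gamma = 0.4

def h(x):
    """Barrier value: signed distance to the nearest obstacle boundary (>= 0 is safe)."""
    return min(np.linalg.norm(np.asarray(x) - np.asarray(c)) - r for c, r in obstacles)

def dcbf_satisfied(x_k, x_next):
    return h(x_next) >= (1.0 - gamma) * h(x_k)

def dcbf_residuals(traj):
    """Per-step constraint residuals (>= 0 means feasible) for a planned trajectory."""
    return np.array([h(traj[k + 1]) - (1.0 - gamma) * h(traj[k])
                     for k in range(len(traj) - 1)])
```

In the full method, residuals like these would enter the ergodic trajectory optimization as inequality constraints rather than being checked after the fact.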
Large pretrained Transformer-based language models like BERT and GPT have changed the landscape of Natural Language Processing (NLP). However, fine-tuning such models still requires a large number of training examples for each target task; thus, annotating multiple datasets and training these models on various downstream tasks becomes time-consuming and expensive. In this work, we propose a simple extension of Prototypical Networks for few-shot text classification. Our main idea is to replace the class prototypes by Gaussians and introduce a regularization term that encourages the examples to be clustered near the appropriate class centroids. Experimental results show that our method outperforms various strong baselines on 13 public and 4 internal datasets. Furthermore, we use the class distributions as a tool for detecting potential out-of-distribution (OOD) data points during deployment.
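A minimal sketch of this idea is given below, assuming a PyTorch encoder that maps texts to embeddings: each class prototype becomes a Gaussian (mean plus diagonal variance), queries are scored by Gaussian log-density, and a regularizer pulls support examples toward their class centroid. The function names, diagonal-covariance choice, and regularizer weight are illustrative assumptions, not the authors' exact formulation.

```python
# Gaussian "prototypes" with a cluster-tightness regularizer for one few-shot episode.
import torch
import torch.nn.functional as F

def episode_loss(support_emb, support_y, query_emb, query_y, n_classes, reg_weight=0.1):
    means, log_vars, reg = [], [], 0.0
    for c in range(n_classes):
        emb_c = support_emb[support_y == c]                    # (n_support_c, d)
        mu_c = emb_c.mean(dim=0)
        var_c = emb_c.var(dim=0, unbiased=False) + 1e-6        # diagonal covariance
        means.append(mu_c)
        log_vars.append(var_c.log())
        reg = reg + ((emb_c - mu_c) ** 2).sum(dim=1).mean()    # pull examples to centroid
    mu = torch.stack(means)                                    # (C, d)
    log_var = torch.stack(log_vars)                            # (C, d)

    # Gaussian log-density of each query under each class (up to a constant).
    diff = query_emb.unsqueeze(1) - mu.unsqueeze(0)            # (Q, C, d)
    logits = -0.5 * ((diff ** 2) / log_var.exp().unsqueeze(0)
                     + log_var.unsqueeze(0)).sum(-1)           # (Q, C)
    return F.cross_entropy(logits, query_y) + reg_weight * reg / n_classes
```

At deployment time, the same per-class Gaussians could flag likely OOD inputs whenever the best class log-density falls below a threshold.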
Functional registration algorithms represent point clouds as functions (e.g., spatial occupancy fields), avoiding the unreliable correspondence estimation of conventional least-squares registration algorithms. However, existing functional registration algorithms are computationally expensive. Furthermore, the ability to register with unknown scale is necessary in tasks such as CAD model-based object localization, yet such support is absent in functional registration. In this work, we propose a scale-invariant functional registration algorithm with linear time complexity. We achieve linear time complexity by efficiently approximating the L2 distance between functions using orthonormal basis functions. The use of orthonormal basis functions leads to a formulation that is compatible with least-squares registration. Benefiting from the least-squares formulation, we use the theory of translation- and rotation-invariant measurements to decouple scale estimation, thereby achieving scale-invariant registration. We evaluate the proposed algorithm, named FLS (Functional Least-Squares), on standard 3D registration benchmarks, showing that FLS is an order of magnitude faster than state-of-the-art functional registration algorithms without compromising accuracy or robustness. FLS also outperforms state-of-the-art least-squares registration algorithms in accuracy and robustness with both known and unknown scale. Finally, we demonstrate applying FLS to register point clouds with different densities and partial overlap, point clouds of different objects within the same category, and point clouds of real-world objects with noisy RGB-D measurements.
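To illustrate the core approximation, the sketch below summarizes each point cloud by coefficients of its empirical density in a cosine basis; for an orthonormal basis, the L2 distance between two such functions reduces to the Euclidean distance between their coefficient vectors, which is what makes a least-squares, linear-time formulation possible. The 2-D setting, basis choice, normalization, and number of modes are simplifying assumptions for illustration, not the FLS algorithm itself.

```python
# Coefficient-space approximation of the L2 distance between two point-cloud "functions".
import itertools
import numpy as np

K = 8   # modes per axis

def coeffs(points):
    """Cosine-basis coefficients of the empirical density of `points` (N, 2) in [0, 1]^2."""
    x, y = points[:, 0], points[:, 1]
    c = np.empty(K * K)
    for i, (k1, k2) in enumerate(itertools.product(range(K), range(K))):
        c[i] = np.mean(np.cos(np.pi * k1 * x) * np.cos(np.pi * k2 * y))
    return c

def functional_l2(points_a, points_b):
    """Approximate L2 distance between the two underlying density functions."""
    return np.linalg.norm(coeffs(points_a) - coeffs(points_b))
```

Registration then amounts to minimizing such a coefficient residual over rotation, translation, and (optionally) scale in a least-squares fashion.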
The use of human demonstrations in reinforcement learning has been shown to significantly improve agent performance. However, any requirement for a human to manually "teach" the model is somewhat contrary to the goals of reinforcement learning. This paper attempts to minimize human involvement in the learning process while still retaining the performance advantages by using a single human example, collected through a simple-to-use virtual reality simulation, to assist RL training. Our method augments a single demonstration to generate numerous human-like demonstrations that, when combined with Deep Deterministic Policy Gradients and Hindsight Experience Replay (DDPG+HER), significantly improve training time on simple tasks and allow the agent to solve a complex task (block stacking) that DDPG+HER alone cannot solve. The model achieves this significant training advantage using a single human example, requiring less than a minute of human input.
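As a rough illustration of the augmentation idea, the sketch below perturbs one recorded demonstration with small noise to yield many "human-like" episodes for the replay buffer. The trajectory format, noise scale, and clipping range are illustrative assumptions; the paper's actual augmentation procedure may differ.

```python
# Expand a single VR-recorded demonstration into many noisy variants.
import numpy as np

def augment_demo(demo, n_copies=100, noise_std=0.01, seed=0):
    """demo: array (T, action_dim) recorded from a single human demonstration."""
    rng = np.random.default_rng(seed)
    demos = []
    for _ in range(n_copies):
        noisy = demo + rng.normal(0.0, noise_std, size=demo.shape)
        demos.append(np.clip(noisy, -1.0, 1.0))   # keep actions in the valid range
    return demos

# The augmented episodes would be inserted into the replay buffer alongside the
# agent's own rollouts before (or during) DDPG+HER training.
```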
Recent advances in artificial intelligence (AI) at the Internet of Things (IoT)-enabled network edge have brought the benefits of intelligence to numerous applications (e.g., smart agriculture, smart hospitals, and smart factories) by enabling low latency and computational efficiency. However, deploying state-of-the-art convolutional neural networks (CNNs) such as VGG-16 and ResNets on resource-constrained edge devices is practically infeasible due to their large numbers of parameters and floating-point operations (FLOPs). Thus, the concept of network pruning as a form of model compression is gaining attention for accelerating CNNs on low-power devices. State-of-the-art pruning approaches, whether structured or unstructured, do not consider the different underlying nature of the complexity exhibited by convolutional layers and follow a train-prune-retrain pipeline, resulting in additional computational overhead. In this work, we propose a novel and computationally efficient pruning pipeline that exploits the inherent layer-level complexity of CNNs. Unlike typical methods, our proposed complexity-driven algorithm selects a particular layer for filter pruning based on its contribution to the overall network complexity. We follow a procedure that directly trains the pruned model and avoids the computationally complex ranking and fine-tuning steps. Moreover, we define three modes of pruning, namely parameter-aware (PA), FLOPs-aware (FA), and memory-aware (MA), to introduce versatile compression of CNNs. Our results show the competitive performance of our approach in terms of accuracy and acceleration. Finally, we present trade-offs between different resources and accuracy, which can be helpful for developers in making the right decisions in resource-constrained IoT environments.
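The sketch below illustrates the layer-selection step: compute each convolutional layer's share of total parameters, FLOPs, or activation memory and pick the layer with the largest share as the next candidate for filter pruning. The example layer table and the cost formulas (for stride-1, padded convolutions) are illustrative assumptions, not the paper's exact accounting.

```python
# Complexity-driven layer selection under parameter-aware (PA), FLOPs-aware (FA),
# and memory-aware (MA) modes.
from dataclasses import dataclass

@dataclass
class ConvLayer:
    name: str
    c_in: int
    c_out: int
    k: int       # kernel size
    h: int       # output height
    w: int       # output width

def layer_cost(layer, mode):
    params = layer.c_in * layer.c_out * layer.k * layer.k
    if mode == "PA":                                   # parameter-aware
        return params
    if mode == "FA":                                   # FLOPs-aware
        return params * layer.h * layer.w
    if mode == "MA":                                   # memory-aware (output activations)
        return layer.c_out * layer.h * layer.w
    raise ValueError(mode)

def select_layer(layers, mode):
    costs = {l.name: layer_cost(l, mode) for l in layers}
    total = sum(costs.values())
    # Layer whose contribution to overall complexity is largest under the chosen mode.
    return max(costs, key=costs.get), {n: c / total for n, c in costs.items()}

layers = [ConvLayer("conv1", 3, 64, 3, 224, 224),
          ConvLayer("conv2", 64, 128, 3, 112, 112),
          ConvLayer("conv3", 128, 256, 3, 56, 56)]
print(select_layer(layers, mode="FA"))
```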
Many existing person re-identification (Re-ID) methods depend on feature maps that are either partitioned to localize parts of a person or reduced to create a global representation. While part localization has shown significant success, it relies on position-based partitioning or static feature templates. These, however, assume the prior existence of the parts in a given image or their fixed locations, ignoring image-specific information, which limits their usability in challenging scenarios such as Re-ID with partial occlusions and partial probe images. In this paper, we introduce a spatial-attention-based dynamic part template initialization module that dynamically generates part templates using mid-level semantic features in the early layers of the backbone. Following layers of self-attention, a simplified cross-attention scheme is used to extract template features of various human body parts from the backbone's human-part features, improving the discriminative ability of the overall model. We further explore adaptive weighting of the part descriptors to quantify the absence or occlusion of local attributes and suppress the contribution of the corresponding part descriptors to the matching criteria. Extensive experiments on holistic, occluded, and partial Re-ID task benchmarks demonstrate that our proposed architecture is able to achieve competitive performance. Code will be included in the supplementary material and will be made publicly available.
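The sketch below illustrates the simplified cross-attention step in PyTorch: a small set of dynamically initialized part templates (queries) attends over the backbone's spatial feature map (keys/values) to produce one descriptor per body part, plus a visibility score that can down-weight missing or occluded parts during matching. The tensor shapes, single-head formulation, and visibility head are illustrative assumptions, not the authors' exact architecture.

```python
# Part templates attending over a backbone feature map to yield part descriptors
# and per-part visibility weights.
import torch
import torch.nn as nn

class PartCrossAttention(nn.Module):
    def __init__(self, dim=256, n_parts=6):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.visibility = nn.Linear(dim, 1)

    def forward(self, templates, feat_map):
        """templates: (B, n_parts, dim) from the dynamic initialization module.
        feat_map: (B, dim, H, W) backbone features."""
        B, C, H, W = feat_map.shape
        tokens = feat_map.flatten(2).transpose(1, 2)                    # (B, H*W, C)
        q = self.q_proj(templates)                                      # (B, P, C)
        k = self.k_proj(tokens)                                         # (B, HW, C)
        v = self.v_proj(tokens)
        attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, P, HW)
        parts = attn @ v                                                # (B, P, C) descriptors
        vis = torch.sigmoid(self.visibility(parts)).squeeze(-1)         # (B, P) weights
        return parts, vis
```

During matching, per-part distances could be weighted by the product of the two images' visibility scores so that occluded parts contribute little to the final ranking.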